
    Importance Sampling for a Markov Modulated Queuing Network with Customer Impatience until the End of Service

    For more than two decades, there has been growing interest in fast simulation techniques for estimating the probabilities of rare events in queuing networks. Importance sampling is a variance reduction method for simulating rare events. The present paper adds strict deadlines to the model of Dupuis et al., a two-node tandem network with feedback whose arrival and service rates are modulated by an exogenous finite-state Markov process. We derive a closed-form solution for the probability of missing deadlines and then apply the results in an importance sampling technique to estimate the probability of total population overflow, which is a rare event. We also show that the probability of this rare event may be affected by the deadline values.
    Keywords: Importance Sampling, Queuing Network, Rare Event, Markov Process, Deadline
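    As a minimal illustration of the importance-sampling idea (not the Markov-modulated tandem network with deadlines studied in the paper), the sketch below estimates an overflow probability for a single-node queue's embedded random walk, using the classical change of measure that interchanges the arrival and service rates. All rates and levels are illustrative assumptions.

```python
import random

def is_overflow_probability(lam, mu, N, n_runs=100_000, seed=0):
    """Estimate P(queue reaches level N before emptying, starting from 1)
    for the embedded random walk of a single queue, using importance
    sampling with the classical interchanged-rates change of measure."""
    rng = random.Random(seed)
    p = lam / (lam + mu)        # original up-step (arrival) probability
    q = 1.0 - p                 # original down-step (service) probability
    p_tilde, q_tilde = q, p     # IS measure: swap arrival and service rates

    total = 0.0
    for _ in range(n_runs):
        level, weight = 1, 1.0
        while 0 < level < N:
            if rng.random() < p_tilde:      # up step under the IS measure
                level += 1
                weight *= p / p_tilde       # likelihood-ratio update
            else:                           # down step under the IS measure
                level -= 1
                weight *= q / q_tilde
        if level == N:                      # overflow before emptying
            total += weight
    return total / n_runs

# Example: lam=1, mu=4, N=25 -- a probability far too small for naive simulation.
print(is_overflow_probability(1.0, 4.0, 25))
```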

    An efficient filter with low memory usage for multimedia data of industrial Internet of Things

    One of the essential concerns of the Internet of Things (IoT) is supporting industrial systems and data architectures for the evolution of transportation and logistics. Given the openness of the Industrial IoT (IIoT), the need for accessibility, availability, and searchability of data has rapidly increased. The primary purpose of this research is to propose an Efficient Two-Dimensional Filter (ETDF) that stores multimedia data of IIoT applications in a specific format to achieve faster response and dynamic updating. The filter consists of a two-dimensional array and a hash function integrated into a cuckoo filter for efficient use of memory. This study evaluates the scalability of the filter by increasing the number of requests from 10,000 to 100,000. To assess the performance of the proposed filter, we measure access time and lookup message latency. The results show that the proposed filter improves access time by 12% compared to a Fast Two-Dimensional Filter (FTDF). Moreover, it improves memory usage by 20% compared to FTDF. Experiments also indicate better access time for the proposed filter compared to other filters (i.e., the Bloom, quotient, cuckoo, and FTD filters). Insertion and deletion times are essential parameters when comparing filters, so they are also analyzed.
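    The exact layout of the ETDF is not spelled out in the abstract; the sketch below is an illustrative two-dimensional fingerprint filter, loosely in the spirit of a cuckoo filter, showing how a 2D bucket array plus a hash function can support insert, lookup, and delete. All sizes and names are assumptions.

```python
import hashlib

class TwoDimensionalFilter:
    """Illustrative two-dimensional fingerprint filter: a fixed grid of
    buckets addressed by two candidate rows derived from one hash,
    storing short fingerprints (cuckoo-filter style, without relocation)."""

    def __init__(self, rows=1024, cols=4):
        self.rows, self.cols = rows, cols
        self.table = [[None] * cols for _ in range(rows)]

    def _hashes(self, key):
        digest = hashlib.sha256(key.encode()).digest()
        row = int.from_bytes(digest[:4], "big") % self.rows
        fingerprint = digest[4:6]                              # 16-bit fingerprint
        alt_row = (row ^ int.from_bytes(fingerprint, "big")) % self.rows
        return row, alt_row, fingerprint

    def insert(self, key):
        row, alt_row, fp = self._hashes(key)
        for r in (row, alt_row):                               # try both candidate rows
            for c in range(self.cols):
                if self.table[r][c] is None:
                    self.table[r][c] = fp
                    return True
        return False                                           # filter considered full

    def lookup(self, key):
        row, alt_row, fp = self._hashes(key)
        return fp in self.table[row] or fp in self.table[alt_row]

    def delete(self, key):
        row, alt_row, fp = self._hashes(key)
        for r in (row, alt_row):
            for c in range(self.cols):
                if self.table[r][c] == fp:
                    self.table[r][c] = None
                    return True
        return False

f = TwoDimensionalFilter()
f.insert("sensor-42/frame-001")
print(f.lookup("sensor-42/frame-001"), f.lookup("sensor-42/frame-002"))
```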

    Software defined fog platform

    In recent years, the number of end users connected to the internet of things (IoT) has increased, and we have witnessed the emergence of the cloud computing paradigm. These users utilize network resources to meet their quality of service (QoS) requirements, but traditional networks are not configured to support high scalability, real-time data transfer, and dynamism, resulting in numerous challenges. This research presents a new IoT architecture platform that combines the benefits of two technologies: software-defined networking and the fog paradigm. Software-defined networking (SDN) refers to a centralized control layer of the network that enables sophisticated methods for traffic control and resource allocation, while the fog paradigm allows data to be analyzed and managed at the edge of the network, making it suitable for tasks that require low and predictable delay. Thus, this research provides an in-depth view of the platform's organization, the performance of its base components, and the potential uses of the suggested platform in various applications
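    The abstract does not specify how the controller decides where a request is served; the toy sketch below shows one plausible delay-aware rule a centralized SDN controller could apply, routing delay-sensitive requests to a fog node and everything else to the cloud. Node names, capacities, and latencies are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    latency_ms: float     # expected round-trip delay to this node
    free_cpu: float       # remaining capacity (arbitrary units)

def place_request(cpu_demand, deadline_ms, fog_nodes, cloud):
    """Toy placement rule a centralized controller might apply:
    prefer the lowest-latency fog node that meets the deadline and has
    enough capacity; otherwise fall back to the cloud."""
    candidates = [n for n in fog_nodes
                  if n.latency_ms <= deadline_ms and n.free_cpu >= cpu_demand]
    if candidates:
        best = min(candidates, key=lambda n: n.latency_ms)
        best.free_cpu -= cpu_demand
        return best.name
    return cloud.name

fog = [Node("fog-1", 5.0, 2.0), Node("fog-2", 8.0, 0.5)]
cloud = Node("cloud", 80.0, 1e9)
print(place_request(cpu_demand=1.0, deadline_ms=10.0, fog_nodes=fog, cloud=cloud))
```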

    Resource allocation for fog computing based on software-defined networks

    With the emergence of cloud computing as a processing backbone for the internet of things (IoT), fog computing has been proposed as a solution for delay-sensitive applications: computing servers are placed near the IoT devices. IoT networks are inherently very dynamic, and their topology and resources may change drastically in a short period, so using the traditional networking paradigm to build their communication backbone may lower network performance and increase network configuration convergence latency. It therefore seems more beneficial to employ the software-defined networking paradigm to implement their communication network. In software-defined networking (SDN), separating the network's control and data forwarding planes makes it possible to manage the network in a centralized way. Managing a network through a centralized controller can make it more flexible and agile in response to any changes in network topology and state. This paper presents a software-defined fog platform to host real-time applications in IoT. The effectiveness of the mechanism has been evaluated by conducting a series of simulations. The simulation results show that the proposed mechanism is able to find near-optimal solutions in a much lower execution time than the brute-force method
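    The paper's allocation mechanism itself is not described in the abstract; the sketch below only illustrates the heuristic-versus-brute-force trade-off it mentions, comparing a hypothetical LPT-style greedy placement of tasks on fog servers against exhaustive enumeration of all assignments. Task loads and server capacities are invented for the example.

```python
from itertools import product
import time

def greedy_overload(task_loads, server_caps):
    """Hypothetical LPT-style heuristic (not the paper's exact mechanism):
    place the largest remaining task on the server with the most spare
    capacity and report how much the worst server ends up overloaded."""
    remaining = list(server_caps)
    for load in sorted(task_loads, reverse=True):
        s = max(range(len(remaining)), key=lambda i: remaining[i])
        remaining[s] -= load
    return max(-min(remaining), 0.0)

def brute_force_overload(task_loads, server_caps):
    """Exhaustive search over every task-to-server mapping (exponential cost)."""
    best = float("inf")
    for assignment in product(range(len(server_caps)), repeat=len(task_loads)):
        used = [0.0] * len(server_caps)
        for load, s in zip(task_loads, assignment):
            used[s] += load
        best = min(best, max(max(u - c for u, c in zip(used, server_caps)), 0.0))
    return best

tasks, caps = [3, 1, 4, 1, 5, 9, 2, 6], [12, 10, 9]
for solver in (greedy_overload, brute_force_overload):
    start = time.perf_counter()
    overload = solver(tasks, caps)
    print(f"{solver.__name__}: overload={overload}, "
          f"time={time.perf_counter() - start:.4f}s")
```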

    DTMP: Energy Consumption Reduction in Body Area Networks Using a Dynamic Traffic Management Protocol

    Advances in medical science, together with other fields of science and technology, are causing profound changes in many branches of science and in the methods of providing medical services that affect people's lives. The Wireless Body Area Network (WBAN) represents such a leap, and these networks have opened new branches in the world of telemedicine. Small, precise wireless sensors are installed in or on the body and form a WBAN that samples, processes, and wirelessly transmits various vital signs or environmental parameters. These nodes allow independent monitoring of a person in typical environments and over long periods, and provide the user and medical staff with real-time feedback on the patient's health status. This article introduces WBANs, reviews the issues and applications of medical sensor networks, and proposes a protocol that uses a threshold for data transmission to reduce the power consumption of sensor nodes and increase the lifetime of the network, together with a motion phase to increase the dynamics of the network. The proposed protocol has been compared with the SIMPLE and ATTEMPT protocols. The results indicate a significant reduction in the energy consumption of the sensors and, consequently, of the entire network
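    The following sketch shows the general idea of threshold-based reporting described above (it is not the DTMP specification itself): a node transmits a new reading only when it differs from the last transmitted value by more than a threshold, saving radio energy on redundant samples. The energy costs and sensor readings are assumed values.

```python
import random

class ThresholdSensorNode:
    """Illustrative threshold-based reporting: transmit a reading only when
    it deviates from the last transmitted value by more than a threshold."""

    TX_COST, SAMPLE_COST = 50e-6, 1e-6   # assumed energy costs in joules

    def __init__(self, threshold):
        self.threshold = threshold
        self.last_sent = None
        self.energy_used = 0.0

    def sample(self, value):
        self.energy_used += self.SAMPLE_COST
        if self.last_sent is None or abs(value - self.last_sent) > self.threshold:
            self.last_sent = value
            self.energy_used += self.TX_COST
            return True          # packet transmitted
        return False             # reading suppressed, radio stays idle

rng = random.Random(1)
node = ThresholdSensorNode(threshold=0.5)
readings = [37.0 + rng.gauss(0, 0.2) for _ in range(1000)]   # e.g. body temperature
sent = sum(node.sample(v) for v in readings)
print(f"transmitted {sent} of {len(readings)} samples, energy {node.energy_used:.6f} J")
```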

    Task Scheduling Algorithm Using Covariance Matrix Adaptation Evolution Strategy (CMA-ES) in Cloud Computing

    Cloud computing is considered a computational model that provides resources for users' requests on demand. The need to plan the scheduling of users' jobs has emerged as an important challenge in the field of cloud computing, mainly due to several reasons, including the ever-increasing advancement of information technology, the growth of applications and of users' demand for these applications with high quality, and the popularity of cloud computing among users and its rapid growth in recent years. This research applies the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), an evolutionary optimization algorithm, to task scheduling in the cloud computing environment. The findings indicate that the presented algorithm leads to a reduction in the execution time of all tasks compared to the SPT, LPT, and RLPT algorithms.
    Keywords: Cloud Computing, Task Scheduling, Virtual Machines (VMs), Covariance Matrix Adaptation Evolution Strategy (CMA-ES)
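    The abstract does not give the encoding used, so the sketch below shows one common way to apply CMA-ES to a discrete scheduling problem: a continuous vector is decoded into a task-to-VM assignment and the makespan is minimized. It relies on the external `cma` package, and the task lengths and VM speeds are invented for the example.

```python
import numpy as np
import cma   # pip install cma -- reference CMA-ES implementation

# Toy instance (not the paper's data): task lengths in MI and VM speeds in MIPS.
task_lengths = np.array([400, 250, 900, 120, 600, 700, 300, 150, 500, 800], float)
vm_speeds = np.array([1000, 500, 250], float)

def makespan(x):
    """Decode a real-valued vector into a task-to-VM assignment and return
    the makespan (finish time of the busiest VM)."""
    assignment = np.argmax(x.reshape(len(task_lengths), len(vm_speeds)), axis=1)
    finish = np.zeros(len(vm_speeds))
    for length, vm in zip(task_lengths, assignment):
        finish[vm] += length / vm_speeds[vm]
    return finish.max()

# CMA-ES searches the continuous encoding; argmax decoding maps it to a schedule.
es = cma.CMAEvolutionStrategy(np.zeros(len(task_lengths) * len(vm_speeds)), 0.5,
                              {"maxiter": 200, "verbose": -9})
while not es.stop():
    solutions = es.ask()
    es.tell(solutions, [makespan(np.asarray(s)) for s in solutions])
print("best makespan found:", es.result.fbest)
```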